1,005 research outputs found

    Logical fallacies as informational shortcuts

    “The original publication is available at www.springerlink.com.” Copyright Springer. DOI: 10.1007/s11229-008-9410-y. Peer reviewed.

    The paper argues that the two best-known formal logical fallacies, denying the antecedent (DA) and affirming the consequent (AC), are not just basic and simple errors that prove human irrationality, but informational shortcuts that may provide a quick and dirty way of extracting useful information from the environment. DA and AC are shown to be degraded versions of Bayes’ theorem, once it is stripped of some of its probabilities. The less those probabilities count, the closer these fallacies come to reasoning that is not only informationally useful but also logically valid.
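
    A minimal sketch of the formal point, in notation of my own rather than the paper’s. Bayes’ theorem for a conditional “if p then q” can be written as

        \[
        P(p \mid q) \;=\; \frac{P(q \mid p)\,P(p)}{P(q \mid p)\,P(p) + P(q \mid \neg p)\,P(\neg p)}
        \]

    AC infers p from q, and DA infers \neg q from \neg p. As the stripped-out probability P(q \mid \neg p) tends to 0, the posterior P(p \mid q) tends to 1, and P(\neg q \mid \neg p) = 1 - P(q \mid \neg p) also tends to 1, so both inferences approach deductive validity; when P(q \mid \neg p) is merely small, they remain fast, approximately reliable shortcuts.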

    Logica e Pensiero Visivo (Logic and Visual Thinking)

    Peer reviewed.

    Inventing the Educational Subject in the 'Information Age'

    This paper asks how we can situate the educational subject in what Luciano Floridi has defined as an ‘informational ontology’ (Floridi in The philosophy of information. Oxford University Press, Oxford, 2011a). It suggests that Jacques Derrida and Bernard Stiegler offer paths toward rethinking the educational subject that lend themselves to an informational future, and speculates on how, with this knowledge, we can educate so as best to equip ourselves and others for an increasingly digital world. Jacques Derrida thought the concept of the subject ‘indispensable’ (Derrida in The structuralist controversy: the languages of criticism and the sciences of man. Johns Hopkins Press, Baltimore, 1970, 272) as a function, but did not subscribe to or accept any particular theory of how a subject could be defined or developed, because the subject was always situated in and as a context. Following Derrida, Bernard Stiegler explains in Technics and Time, 1 that ‘the relation binding the “who” and the “what” is invention’ (Stiegler in Technics and time 1: the fault of Epimetheus. Stanford University Press, Stanford, 1998, 134). As such, the separation between self and world can be seen as artificial, even when this world is perceived wholly or partly as technological, digital or informational. If this is the case, a responsibility is placed on the educator, who plays a part in ‘inventing’ this distinction (or its absence) for future generations. How this invention of the educational subject is negotiated is therefore one of the many philosophical tasks for digital pedagogy.

    On malfunctioning software

    Artefacts do not always do what they are supposed to, for a variety of reasons, including manufacturing problems, poor maintenance, and normal wear and tear. Since software is an artefact, it should be subject to malfunctioning in the same sense in which other artefacts can malfunction. Yet whether software is on a par with other artefacts when it comes to malfunctioning depends crucially on the level of abstraction used in the analysis. We distinguish between “negative” and “positive” notions of malfunction. A negative malfunction, or dysfunction, occurs when an artefact token either does not (sometimes) or cannot (ever) do what it is supposed to. A positive malfunction, or misfunction, occurs when an artefact token does what it is supposed to but, at least occasionally, also yields some unintended and undesirable effects. We argue that software, understood as type, may misfunction in some limited sense, but cannot dysfunction. Accordingly, one should distinguish software from other technical artefacts: by design, dysfunction is impossible for the former but possible for the latter.
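
    A toy sketch of the distinction (my example, not the paper’s): the function below meets its specification, returning the scores in ascending order, yet also produces an unintended, undesirable side effect, which is a misfunction in the paper’s sense.

        # Toy illustration of a "misfunction": the function does what it is
        # supposed to (return the scores sorted), but it also silently
        # mutates the caller's list -- an unintended, undesirable effect.
        def sorted_scores(scores):
            scores.sort()        # in-place sort: the unintended side effect
            return scores        # the intended, correct result

        original = [3, 1, 2]
        print(sorted_scores(original))  # [1, 2, 3] -- as specified
        print(original)                 # [1, 2, 3] -- caller's data changed

    By contrast, a dysfunction would require some token of the program to fail where the type permits success; since every faithful copy of the same code computes the same function, that gap between token and type never opens for software understood as type.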

    The explanation game: a formal framework for interpretable machine learning

    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on a (conditionally) optimal explanation surface in polynomial time, and we highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as to design new and improved solutions.
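
    A minimal sketch of the Pareto-filtering step, assuming candidate explanations scored on the three axes the paper names (all identifiers and scores are mine, not the authors’):

        # Keep every candidate explanation that no other candidate dominates
        # on all three axes: explanatory accuracy, simplicity, relevance.
        from typing import NamedTuple

        class Explanation(NamedTuple):
            label: str
            accuracy: float    # explanatory accuracy (higher is better)
            simplicity: float  # higher is better
            relevance: float   # higher is better

        def dominates(a: Explanation, b: Explanation) -> bool:
            """True if a is at least as good on every axis, strictly better on one."""
            ge = (a.accuracy >= b.accuracy and a.simplicity >= b.simplicity
                  and a.relevance >= b.relevance)
            gt = (a.accuracy > b.accuracy or a.simplicity > b.simplicity
                  or a.relevance > b.relevance)
            return ge and gt

        def pareto_frontier(cands: list) -> list:
            return [c for c in cands if not any(dominates(o, c) for o in cands)]

        cands = [
            Explanation("full causal trace", 0.99, 0.10, 0.40),
            Explanation("one-feature rule",  0.70, 0.90, 0.80),
            Explanation("constant output",   0.20, 0.50, 0.10),  # dominated
        ]
        for e in pareto_frontier(cands):
            print(e.label)   # prints only the two undominated candidates

    In the paper’s game this filtering would be a single move inside an iterative question-and-answer procedure played at several levels of abstraction; the sketch shows only the trade-off surface itself.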

    Artificial Evil and the Foundation of Computer Ethics

    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and, finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and important examples of hybrid evil has come to light. In this paper it is called artificial evil (AE), and a case is made for considering it a complement to ME and NE that yields a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE’s concerns, and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is that the propensity of an agent’s action to be morally good or evil can be determined even in the absence of biologically sentient participants, which allows artificial agents not only to perpetrate evil (and, for that matter, good) but also to ‘receive’ or ‘suffer from’ it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed constructively in an alternative manner. The case is made that, whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism, and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics.
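
    One way to gloss the formal move, in notation of my own (the paper’s actual definitions are richer): let H(s) measure the entropy of an environment state s, where entropy is understood informationally, as the damage or destruction of informational objects, rather than thermodynamically or in Shannon’s sense. For an action a taking the environment from state s to state s', a hedged sketch is

        \[
        \Delta H(a) \;=\; H(s') - H(s), \qquad
        \text{$a$ tends toward evil if } \Delta H(a) > 0, \quad
        \text{toward good if } \Delta H(a) < 0 .
        \]

    On such a scheme the moral valence of an action depends only on its informational effect on the patient, which is why it can apply even when both agent and patient are artificial.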
